
    Twentieth century shocks, trends and cycles in industrialized nations

    Using annual data on the real Gross Domestic Product per capita of seventeen industrialized nations in the twentieth century, the empirical relevance of shocks, trends and cycles is investigated. A class of neural network models is specified as an extension of the class of vector autoregressive models in order to capture complex data patterns for different countries and subperiods. Empirical evidence indicates nonlinear positive trends in the levels of real GDP per capita, time-varying growth rates, switching behavior of individual countries with respect to their position in the distribution of real GDP per capita levels over time, and club behavior with respect to convergence. Such evidence presents great challenges for economic modelling, forecasting and policy analysis in the long run.
    Keywords: Neural networks; Cycles; Nonlinear trends; Shocks

    On Bayesian structural inference in a simultaneous equation model

    Econometric issues that are considered fundamental in the development of Bayesian structural inference within a Simultaneous Equation Model are surveyed. This line of research was started by the difficulty of specifying prior information which is of interest to economists and which yields tractable posterior and predictive distributions. A major issue is the nonstandard shape of the likelihood due to reduced rank restrictions. It implies that the existence of structural posterior moments under vague prior information is a nontrivial issue. The problem is illustrated through simple examples using artificially generated data in a so-called limited information framework, where the connection with the problem of weak instruments in classical econometrics is also described. A positive development is Bayesian inference on implied characteristics, in particular dynamic features, of a Simultaneous Equation Model. The potential of Bayesian structural inference, using a predictive approach for prior specification and Monte Carlo simulation techniques for computational purposes, is illustrated by means of a prior and posterior analysis of the US business cycle in the period of the depression. A structural prior is elicited through investigation of the implied predictive features.

    Bayes estimates of Markov trends in possibly cointegrated series: an application to US consumption and income

    Stylized facts show that average growth rates of US per capita consumption and income differ in recession and expansion periods. Since a linear combination of such series does not have to be a constant mean process, standard cointegration analysis between the variables to examine the permanent income hypothesis may not be valid. To model the changing growth rates in both series, we introduce a multivariate Markov trend model, which accounts for different growth rates in consumption and income during expansions and recessions and across variables within both regimes. The deviations from the multivariate Markov trend are modeled by a vector autoregressive model. Bayes estimates of this model are obtained using Markov chain Monte Carlo methods. The empirical results suggest the existence of a cointegration relation between US per capita disposable income and consumption, after correction for a multivariate Markov trend. This result is also obtained when per capita investment is added to the vector autoregression.
    Keywords: Cointegration; MCMC; Permanent income hypothesis; Multivariate Markov trend
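The Markov trend described in the abstract above can be illustrated with a univariate sketch: a latent two-state Markov chain switches the drift of a series between an expansion growth rate and a recession growth rate. All parameter values below are invented for illustration; they are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Univariate Markov trend sketch (parameter values invented): a
# two-state Markov chain switches the drift between expansion and
# recession growth rates.
mu = np.array([0.8, -0.5])       # drift in regime 0 (expansion) and 1 (recession)
p_stay = np.array([0.95, 0.80])  # probability of staying in the current regime

T = 200
s = np.zeros(T, dtype=int)       # latent regime path
y = np.zeros(T)                  # level of the series
for t in range(1, T):
    if rng.random() >= p_stay[s[t - 1]]:
        s[t] = 1 - s[t - 1]      # switch regimes
    else:
        s[t] = s[t - 1]
    y[t] = y[t - 1] + mu[s[t]] + 0.3 * rng.standard_normal()

# Realized average growth per regime; expansions grow, recessions shrink.
growth_by_regime = [np.diff(y)[s[1:] == k].mean() for k in (0, 1)]
```

In the multivariate version of the paper, each series gets its own regime-specific drift and the deviations follow a VAR; the regime path and parameters are then sampled jointly by MCMC.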

    Bayes model averaging of cyclical decompositions in economic time series

    A flexible decomposition of a time series into stochastic cycles under possible non-stationarity is specified, providing both a useful data analysis tool and a very wide model class. A Bayes procedure using Markov Chain Monte Carlo (MCMC) is introduced with a model averaging approach which explicitly deals with the uncertainty on the appropriate number of cycles. The convergence of the MCMC method is substantially accelerated through a convenient reparametrization based on a hierarchical structure of variances in a state space model. The model and corresponding inferential procedure are applied to simulated data and to economic time series like industrial production, unemployment and real exchange rates. We derive the implied posterior distributions of model parameters and some relevant functions thereof, shedding light on a wide range of key features of each economic time series.
    Keywords: Model averaging; Markov Chain Monte Carlo; state space models; Fourier analysis; time series decomposition
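A stochastic cycle of the kind entering such decompositions is commonly specified as a damped trigonometric recursion in state space form (Harvey-style). The sketch below simulates one such component; the damping factor and frequency are invented for illustration and are assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# One stochastic cycle component in state space form (damping rho and
# frequency lam invented for illustration): a damped rotation plus noise.
rho, lam = 0.95, 2 * np.pi / 40   # period of roughly 40 observations
R = rho * np.array([[np.cos(lam), np.sin(lam)],
                    [-np.sin(lam), np.cos(lam)]])

T = 300
psi = np.zeros((T, 2))            # (cycle, auxiliary cycle) state vector
for t in range(1, T):
    psi[t] = R @ psi[t - 1] + 0.1 * rng.standard_normal(2)

cycle = psi[:, 0]                 # cyclical component entering the observation equation
```

A model with k cycles stacks k such bivariate states; the model-averaging step then weights across different values of k by posterior probability.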

    The value of structural information in the VAR model

    Economic policy decisions are often informed by empirical economic analysis. While the decision-maker is usually only interested in good estimates of outcomes, the analyst is interested in estimating the model. Accurate inference on the structural features of a model, such as cointegration, can improve policy analysis as it can improve estimation, inference and forecast efficiency from using that model. However, using a model does not guarantee good estimates of the object of interest and, as it assigns a probability of one to a model and zero to near-by models, takes extreme zero-one account of the "weight of evidence" in the data and the researcher's uncertainty. By using the uncertainty associated with the structural features in a model set, one obtains policy analysis that is not conditional on the structure of the model and can improve efficiency if the features are appropriately weighted. In this paper tools are presented to allow for unconditional inference on the vector autoregressive (VAR) model. In particular, we employ measures on manifolds to elicit priors on subspaces defined by particular features of the VAR model. The features considered are cointegration, exogeneity, deterministic processes and overidentification. Two applications -- money demand in Australia, and a macroeconomic model of the UK proposed by Garratt, Lee, Pesaran, and Shin (2002) -- are used to illustrate the feasibility of the proposed methods.
    Keywords: cointegration; model averaging; exogeneity; Laplace approximation; posterior probabilities; structural modelling

    Note on neural network sampling for Bayesian inference of mixture processes

    In this paper we show some further experiments with neural network sampling, a class of sampling methods that make use of neural network approximations to (posterior) densities, introduced by Hoogerheide et al. (2007). We consider a method where a mixture of Student's t densities, which can be interpreted as a neural network function, is used as a candidate density in importance sampling or the Metropolis-Hastings algorithm. It is applied to an illustrative 2-regime mixture model for the US real GNP growth rate. We explain the non-elliptical shapes of the posterior distribution, and show that the proposed method outperforms Gibbs sampling with data augmentation and the griddy Gibbs sampler.
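The candidate construction can be sketched in a simplified stand-in, not the authors' algorithm: a two-component mixture of Student's t densities serves as the importance-sampling candidate for a bimodal, non-elliptical target, and the weighted draws estimate a posterior mean. The target density and all numbers below are invented for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def norm_logpdf(x, m, s):
    return -0.5 * np.log(2 * np.pi) - np.log(s) - 0.5 * ((x - m) / s) ** 2

def t_logpdf(x, df, loc, scale):
    z = (x - loc) / scale
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * np.log(df * np.pi) - np.log(scale)
            - (df + 1) / 2 * np.log1p(z * z / df))

# Invented bimodal "posterior": equal-weight mixture of two normals.
def log_target(x):
    return np.logaddexp(norm_logpdf(x, -2.0, 0.6), norm_logpdf(x, 2.0, 0.8)) - np.log(2)

# Candidate: equal-weight mixture of two t(5) densities, one near each mode.
locs, scales, df = np.array([-2.0, 2.0]), np.array([0.8, 0.8]), 5.0
n = 20_000
comp = rng.integers(0, 2, n)
draws = locs[comp] + scales[comp] * rng.standard_t(df, size=n)

log_cand = np.logaddexp(t_logpdf(draws, df, locs[0], scales[0]),
                        t_logpdf(draws, df, locs[1], scales[1])) - np.log(2)
log_w = log_target(draws) - log_cand       # log importance weights
w = np.exp(log_w - log_w.max())            # stabilized weights (up to scale)
post_mean = np.sum(w * draws) / np.sum(w)  # IS estimate of the posterior mean
```

The heavy t tails keep the weights bounded when the target has fatter shoulders than a normal candidate would cover; the same mixture can serve as the proposal in an independence Metropolis-Hastings chain.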

    Weakly informative priors and well behaved Bayes factors

    Bartlett's paradox has been taken to imply that using improper priors results in Bayes factors that are not well defined, preventing model comparison in this case. We use well understood principles underlying what is already common practice to demonstrate that this implication is not true for some improper priors, such as the Shrinkage prior due to Stein (1956). While this result would appear to expand the class of priors that may be used for computing posterior odds, we warn against the straightforward use of these priors. Highlighting the role of the prior measure in the behaviour of Bayes factors, we demonstrate pathologies in the prior measures for these improper priors. Using this discussion, we then propose a method of employing such priors by setting rules on the rate of diffusion of prior certainty.
    Keywords: Bayes factor; improper prior; marginal likelihood; shrinkage prior
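Bartlett's paradox itself is easy to reproduce numerically. In the toy example below (not the paper's setting), one observation y ~ N(theta, 1) is compared under M0: theta = 0 versus M1: theta ~ N(0, v); as the prior variance v grows, the Bayes factor for M1 collapses toward zero no matter how far y is from zero.

```python
import math

# Toy illustration of Bartlett's paradox (numbers invented): the marginal
# likelihood of y under M1 is N(0, 1 + v), so a more diffuse prior on
# theta mechanically favours the restricted model M0.
def norm_pdf(x, var):
    return math.exp(-0.5 * x * x / var) / math.sqrt(2 * math.pi * var)

y = 3.0                               # data seemingly far from theta = 0
m0 = norm_pdf(y, 1.0)                 # marginal likelihood under M0
bf = [norm_pdf(y, 1.0 + v) / m0 for v in (10.0, 1_000.0, 100_000.0)]
# bf shrinks toward zero as v grows, despite y favouring a nonzero theta
```

This is why the rate at which prior certainty is allowed to diffuse, the quantity the paper proposes to regulate, matters for the limiting behaviour of posterior odds.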

    A reconsideration of the Angrist-Krueger analysis on returns to education

    In this paper we reconsider the analysis of the effect of education on income by Angrist and Krueger (1991). In order to account for possible endogeneity of the education spell, these authors use quarter of birth to form valid instruments. Angrist and Krueger apply a classical method, two-stage least-squares (2SLS), and consider results for data sets on individuals from all states of the US. In this paper the research by Angrist and Krueger is extended both in a methodological and an empirical way. Classical as well as Bayesian methods are used. Bayesian results under the Jeffreys prior are emphasized, as these results are valid in finite samples and because in the instrumental variables (IV) regression model the Jeffreys prior is in a certain sense, truly, non-informative. Further, it is considered how results vary between subsets of the data corresponding to regions of the US. Finally, some assumptions of Angrist and Krueger are investigated and it is examined if one could still obtain usable results if some assumptions are dropped. Our main findings are: (1) The Angrist-Krueger results on returns to education for the USA are almost completely determined by data from a few Southern states; (2) The conclusion of Bound, Jaeger and Baker (1995), that the instruments of Angrist and Krueger give hardly any usable information concerning the causal effect of education on wages, is too strong. A model of Angrist and Krueger (or a slightly modified version) can give usable information on the causal effect of education on income in the Southern region of the US; (3) The instruments for education that are based on quarter of birth are stronger for people with at most 8 or at least 14 years of education than for people with 9-13 years of education. This suggests that quarter of birth does not only affect the number of completed years of schooling for those who leave school as soon as the law allows for it, as these persons usually have completed 9-13 years of education. Therefore, if one intends to increase the understanding of the working of the quarter-of-birth instruments, it is a better idea to focus on differences between states in school entry requirements and/or compulsory schooling laws for children of age 5-7 than to concentrate on the differences in compulsory schooling laws for students of age 16-18.
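The instrumental-variables logic at issue can be sketched in a few lines of simulated data: a binary instrument (standing in for a quarter-of-birth dummy) shifts education, education is endogenous through unobserved ability, OLS is biased upward, and just-identified 2SLS recovers the causal effect. All coefficients below are invented for illustration; they are not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated just-identified IV sketch (all coefficients invented):
# the true causal effect of education on log wage is 0.08.
n = 50_000
z = rng.integers(0, 2, n).astype(float)       # instrument, e.g. quarter-of-birth dummy
u = rng.standard_normal(n)                    # unobserved ability
educ = 12 + 0.3 * z + u + 0.5 * rng.standard_normal(n)
wage = 1.0 + 0.08 * educ + 0.4 * u + 0.2 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), educ])
Z = np.column_stack([np.ones(n), z])
beta_ols = np.linalg.solve(X.T @ X, X.T @ wage)    # biased upward: picks up ability
beta_2sls = np.linalg.solve(Z.T @ X, Z.T @ wage)   # just-identified 2SLS: (Z'X)^-1 Z'y
```

Weakening the first-stage coefficient (0.3 above) toward zero reproduces the weak-instrument problem that motivates both the Bound-Jaeger-Baker critique and the finite-sample Bayesian analysis under the Jeffreys prior.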

    Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent shocks. Measures on manifolds are employed in order to elicit uniform priors on subspaces defined by particular structural features of linear VARs. Second, the VAR model is extended to include a smooth transition function in a (monetary) equation and stochastic volatility in the disturbances. The risk of a liquidity trap in the USA, UK and Japan is evaluated, together with the expected cost of a policy adjustment of central banks. Posterior probabilities of different models are evaluated using Markov chain Monte Carlo techniques.
    Keywords: cointegration; Grassmann manifold; great ratios; impulse response; liquidity trap; model averaging; posterior probability; stochastic trend; orthogonal group; vector autoregressive model
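For a finite model set, the averaging step reduces to weighting each model's output by its posterior model probability: prior probability times marginal likelihood, renormalized. The sketch below uses invented log marginal likelihoods and invented model-specific estimates of liquidity-trap risk; it illustrates the weighting arithmetic, not the paper's VAR computations.

```python
import math

# Finite-set Bayesian model averaging with invented numbers.
log_ml = [-101.3, -100.2, -104.9]    # hypothetical log marginal likelihoods
prior = [1 / 3, 1 / 3, 1 / 3]        # equal prior model probabilities
risk = [0.05, 0.12, 0.30]            # each model's liquidity-trap risk estimate

log_post = [math.log(p) + l for p, l in zip(prior, log_ml)]
m = max(log_post)
unnorm = [math.exp(lp - m) for lp in log_post]      # stabilized exponentiation
post = [u / sum(unnorm) for u in unnorm]            # posterior model probabilities
bma_risk = sum(w * r for w, r in zip(post, risk))   # model-averaged risk
```

Subtracting the maximum before exponentiating avoids underflow, which matters because log marginal likelihoods of realistic models are large negative numbers.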

    A Bayesian analysis of the PPP puzzle using an unobserved components model

    The failure to describe the time series behaviour of most real exchange rates as temporary deviations from fixed long-term means may be due to time variation of the equilibria themselves, see Engel (2000). We implement this idea using an unobserved components model and decompose the observations on real exchange rates into long-term components, which capture the time variation of the mean, and into medium- and short-term components, which measure temporary deviations. A simulation-based Bayesian analysis is introduced to compute the posterior distribution of (functions of) the model parameters. A stationarity test in this setup indicates that the mean is slowly time-varying. Subsequently, we use our flexible model to derive the implied distributions of some key features of real exchange rates. Most notably, the half-life of deviations from the mean, which is a measure of persistence, is lowered. This provides a possible explanation for the PPP puzzle.
    Keywords: Gibbs sampling; purchasing power parity puzzle; real exchange rate; time-varying mean
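The half-life has a closed form under an AR(1) approximation with persistence rho: it is the horizon h solving rho^h = 1/2, i.e. h = ln(1/2) / ln(rho). The rho values below are illustrative, not the paper's estimates.

```python
import math

# Half-life of a deviation under an AR(1) with persistence rho
# (rho values invented for illustration).
def half_life(rho):
    return math.log(0.5) / math.log(rho)

hl_puzzle = half_life(0.84)   # about 4 periods: the 3-5 year range of the PPP puzzle at annual frequency
hl_short = half_life(0.50)    # exactly 1 period
```

Allowing the mean itself to drift absorbs part of the measured persistence, which is why the implied half-life of deviations falls once the long-term component is separated out.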